AAAI.2018 - Search and Constraint Satisfaction

Total: 13

#1 Sweep-Based Propagation for String Constraint Solving

Authors: Roberto Amadini ; Graeme Gange ; Peter Stuckey

Solving constraints over strings is an important emerging field. Recently, a Constraint Programming approach based on dashed strings has been proposed to enable a compact domain representation for potentially large bounded-length string variables. In this paper, we present a more efficient algorithm for propagating equality (and related constraints) over dashed strings. We call this propagation sweep-based. Experimental evidence shows that sweep-based propagation significantly outperforms state-of-the-art approaches for string constraint solving.
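
As a rough illustration of the representation this paper builds on (a sketch of the general dashed-string idea from the literature, not the authors' implementation): a string variable's domain is described as a sequence of blocks S^{l,u}, each pairing a character set S with lower and upper bounds on how many characters from S occur. The class and method names below are invented for illustration.

```python
# Minimal sketch of a dashed-string domain as a sequence of blocks S^{l,u}.
# Names (Block, DashedString) are illustrative, not the paper's API.
from dataclasses import dataclass

@dataclass
class Block:
    chars: frozenset   # admissible characters S
    lo: int            # minimum number of occurrences l
    hi: int            # maximum number of occurrences u

class DashedString:
    def __init__(self, blocks):
        self.blocks = list(blocks)

    def min_len(self):
        # Shortest concrete string the domain still admits.
        return sum(b.lo for b in self.blocks)

    def max_len(self):
        # Longest concrete string the domain still admits.
        return sum(b.hi for b in self.blocks)

    def admits(self, s):
        # Naive membership test: try to split s across the blocks.
        # (Real propagators reason on blocks directly instead of enumerating.)
        def go(i, bi):
            if bi == len(self.blocks):
                return i == len(s)
            b = self.blocks[bi]
            for take in range(b.lo, b.hi + 1):
                chunk = s[i:i + take]
                if len(chunk) < take:
                    break
                if all(c in b.chars for c in chunk) and go(i + take, bi + 1):
                    return True
            return False
        return go(0, 0)

# Example: {a,b}^{1,2} . {c}^{0,1} admits "abc" and "bb", but not "cc".
d = DashedString([Block(frozenset("ab"), 1, 2), Block(frozenset("c"), 0, 1)])
assert d.admits("abc") and d.admits("bb") and not d.admits("cc")
```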

#2 Exact MAP-Inference by Confining Combinatorial Search With LP Relaxation

Authors: Stefan Haller ; Paul Swoboda ; Bogdan Savchynskyy

We consider the MAP-inference problem for graphical models, which is a valued constraint satisfaction problem defined on real numbers with a natural summation operation. We propose a family of relaxations (different from the famous Sherali-Adams hierarchy), which naturally define lower bounds for its optimum. This family always contains a tight relaxation, and we give an algorithm able to find it and therefore to solve the initial non-relaxed NP-hard problem. The relaxations we consider decompose the original problem into two non-overlapping parts: an easy LP-tight part and a difficult one. For the latter part a combinatorial solver must be used. As we show in our experiments, in a number of applications the second, difficult part constitutes only a small fraction of the whole problem. This property allows us to significantly reduce the computational time of the combinatorial solver and therefore to solve problems that were previously out of reach.
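
For readers unfamiliar with the terminology, the MAP-inference problem for a pairwise graphical model over a graph (V, E), label spaces X_u, and real-valued costs θ is commonly written as the energy minimization below. This is the standard formulation whose LP relaxation the paper's relaxation family refines; it is not necessarily the paper's exact notation.

```latex
% MAP-inference / valued CSP objective with summation of unary and pairwise costs
\min_{x \in \prod_{u \in V} X_u}
  \;\sum_{u \in V} \theta_u(x_u) \;+\; \sum_{uv \in E} \theta_{uv}(x_u, x_v)
```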

#3 A SAT+CAS Method for Enumerating Williamson Matrices of Even Order

Authors: Curtis Bright ; Ilias Kotsireas ; Vijay Ganesh

We present for the first time an exhaustive enumeration of Williamson matrices of even order n < 65. The search method relies on the novel SAT+CAS paradigm of coupling SAT solvers with computer algebra systems so as to take advantage of the advances made in both the field of satisfiability checking and the field of symbolic computation. Additionally, we use a programmatic SAT solver which allows conflict clauses to be learned programmatically, through a piece of code specifically tailored to the domain area. Prior to our work, Williamson matrices had only been enumerated for odd orders n < 60, so our work extends the orders for which Williamson matrices have been enumerated and provides the first enumeration of Williamson matrices of even order. Our results show that Williamson matrices of even order tend to be much more abundant than those of odd order. In particular, Williamson matrices exist for every even order n < 65, whereas they are known not to exist in the odd orders 35, 47, 53, and 59.
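
For context, the defining condition being enumerated is standard: Williamson matrices of order n are four symmetric circulant ±1 matrices A, B, C, D with A² + B² + C² + D² = 4nI. The NumPy checker below is only an illustrative sketch of that condition, not the paper's SAT+CAS machinery.

```python
# Sketch: check the Williamson condition A^2+B^2+C^2+D^2 = 4nI for
# symmetric circulant +/-1 matrices given by their first rows.
import numpy as np

def circulant(first_row):
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)])

def is_williamson(rows):
    # rows: four length-n +/-1 sequences defining A, B, C, D.
    n = len(rows[0])
    mats = [circulant(np.array(r)) for r in rows]
    if any(set(np.unique(m)) - {-1, 1} for m in mats):   # entries must be +/-1
        return False
    if any(not np.array_equal(m, m.T) for m in mats):    # symmetry
        return False
    total = sum(m @ m for m in mats)
    return np.array_equal(total, 4 * n * np.eye(n, dtype=int))

# Tiny example of order 1: A = B = C = D = [1] gives 4 * I_1.
print(is_williamson([[1], [1], [1], [1]]))  # True
```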

#4 MaxSAT Resolution With the Dual Rail Encoding

Authors: Maria Luisa Bonet ; Sam Buss ; Alexey Ignatiev ; Joao Marques-Silva ; Antonio Morgado

Conflict-driven clause learning (CDCL) is at the core of the success of modern SAT solvers. In terms of propositional proof complexity, CDCL has been shown to be as strong as general resolution. Improvements to SAT solvers can be realized either by improving existing algorithms, or by exploiting proof systems stronger than CDCL. Recent work proposed an approach for solving SAT by reduction to Horn MaxSAT. The proposed reduction coupled with MaxSAT resolution represents a new proof system, DRMaxSAT, which was shown to enable polynomial-time refutations of pigeonhole formulas, in contrast with either CDCL or general resolution. This paper investigates the DRMaxSAT proof system, and shows that DRMaxSAT p-simulates general resolution, that AC0-Frege+PHP p-simulates DRMaxSAT, and that DRMaxSAT cannot p-simulate AC0-Frege+PHP or the cutting planes proof system.
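
For concreteness, the dual-rail reduction from SAT to Horn MaxSAT that DRMaxSAT builds on can be sketched as follows (a minimal sketch based on the published encoding; the clause representation and function name are illustrative): each variable x gets fresh variables p_x ("x is true") and n_x ("x is false"), a hard clause forbids both, literal x is rewritten as ¬n_x and ¬x as ¬p_x, and the unit clauses p_x, n_x are soft.

```python
# Sketch of the dual-rail encoding of a CNF into Horn MaxSAT.
# Clauses are DIMACS-style lists of non-zero ints; input variable x maps to
# p_x = 2x - 1 ("x is true") and n_x = 2x ("x is false").

def dual_rail(clauses, num_vars):
    p = lambda x: 2 * x - 1
    n = lambda x: 2 * x
    hard, soft = [], []
    for x in range(1, num_vars + 1):
        hard.append([-p(x), -n(x)])   # x cannot be both true and false
        soft.append([p(x)])           # unit soft clauses, weight 1 each
        soft.append([n(x)])
    # Rewrite each clause: literal x -> -n_x, literal -x -> -p_x.
    # Every hard clause is purely negative, hence Horn.
    for cl in clauses:
        hard.append([-n(l) if l > 0 else -p(-l) for l in cl])
    return hard, soft

# The input CNF over m variables is satisfiable exactly when the hard part
# can be satisfied while falsifying only m of the soft unit clauses.
hard, soft = dual_rail([[1, -2], [2]], num_vars=2)
print(hard)  # [[-1, -2], [-3, -4], [-2, -3], [-4]]
```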

#5 On Cryptographic Attacks Using Backdoors for SAT

Authors: Alexander Semenov ; Oleg Zaikin ; Ilya Otpuschennikov ; Stepan Kochemazov ; Alexey Ignatiev

Propositional satisfiability (SAT) is at the nucleus of state-of-the-art approaches to a variety of computationally hard problems, one of which is cryptanalysis. Moreover, a number of practical applications of SAT can only be tackled efficiently by identifying and exploiting a subset of the formula's variables called a backdoor set (or simply a backdoor). This paper proposes a new class of backdoor sets for SAT used in the context of cryptographic attacks, namely guess-and-determine attacks. The idea is to identify the best set of backdoor variables subject to a statistically estimated hardness of the guess-and-determine attack using a SAT solver. Experimental results on weakened variants of the renowned encryption algorithms demonstrate the advantage of the proposed approach over the state of the art in terms of the estimated hardness of the resulting guess-and-determine attacks.
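
The "statistically estimated hardness" of a guess-and-determine attack is, in this line of work, typically a Monte Carlo estimate: sample random assignments to the candidate backdoor set, measure the SAT solver's runtime on the resulting simplified instances, and extrapolate to all 2^|B| guesses. The sketch below is schematic; estimate_hardness and solve_time_with_assignment are placeholder names, not the authors' code.

```python
# Schematic Monte Carlo estimate of guess-and-determine hardness for a
# candidate backdoor set B (an assumption-based sketch, not the paper's code).
import random

def estimate_hardness(backdoor_vars, solve_time_with_assignment, samples=100):
    """backdoor_vars: variable ids in the candidate backdoor set B.
    solve_time_with_assignment: callback that runs a SAT solver on the
    instance with the given partial assignment fixed and returns its
    runtime in seconds.  Returns an estimated total time to try all
    2^|B| guesses."""
    total = 0.0
    for _ in range(samples):
        assignment = {v: random.choice([False, True]) for v in backdoor_vars}
        total += solve_time_with_assignment(assignment)
    mean_time = total / samples
    return (2 ** len(backdoor_vars)) * mean_time

# An outer search over candidate backdoor sets would then minimize this
# estimate to pick the "best" set, as described in the abstract.
```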

#6 Verifying Properties of Binarized Deep Neural Networks

Authors: Nina Narodytska ; Shiva Kasiviswanathan ; Leonid Ryzhyk ; Mooly Sagiv ; Toby Walsh

Understanding properties of deep neural networks is an important challenge in deep learning. In this paper, we take a step in this direction by proposing a rigorous way of verifying properties of a popular class of neural networks, Binarized Neural Networks, using the well-developed means of Boolean satisfiability. Our main contribution is a construction that creates a representation of a binarized neural network as a Boolean formula. Our encoding is the first exact Boolean representation of a deep neural network. Using this encoding, we leverage the power of modern SAT solvers along with a proposed counterexample-guided search procedure to verify various properties of these networks. A particular focus is on the critical property of robustness to adversarial perturbations. For this property, our experimental results demonstrate that our approach scales to medium-size deep neural networks used in image classification tasks. To the best of our knowledge, this is the first work on verifying properties of deep neural networks using an exact Boolean encoding of the network.
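
The sketch below illustrates the general principle behind such encodings (not the paper's exact construction): a binarized neuron computes sign(w·x + b) with weights and inputs in {-1, +1}, which over Boolean inputs reduces to a cardinality constraint "at least k agreement literals are true"; cardinality constraints in turn have standard CNF encodings. The brute-force check and all names are illustrative.

```python
# Sketch: a binarized neuron fires iff w . x + b >= 0, with all w_i, x_i in
# {-1, +1}.  Writing "agreement_i" for the Boolean literal "x_i has the same
# sign as w_i", the neuron fires iff at least k = ceil((d - b) / 2)
# agreements hold -- a cardinality constraint.  Brute-force sanity check:
from itertools import product
from math import ceil

def neuron_fires(w, x, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b >= 0

def check_reduction(w, b):
    d = len(w)
    k = ceil((d - b) / 2)
    for bits in product([False, True], repeat=d):
        x = [1 if bit else -1 for bit in bits]
        agreements = sum(1 for wi, xi in zip(w, x) if wi == xi)
        assert neuron_fires(w, x, b) == (agreements >= k)

check_reduction([1, -1, 1], b=0)
check_reduction([1, 1, -1, -1], b=1)
print("cardinality reduction verified on the toy neurons")
```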

#7 Community-Based Trip Sharing for Urban Commuting

Authors: Mohd. Hafiz Hasan ; Pascal Van Hentenryck ; Ceren Budak ; Jiayu Chen ; Chhavi Chaudhry

This paper explores Community-Based Trip Sharing, which uses the structure of communities and commuting patterns to optimize car or ride sharing for urban communities. It introduces the Commuting Trip Sharing Problem (CTSP) and proposes an optimization approach to maximize trip sharing. The optimization method, which exploits trip clustering, shareability graphs, and mixed-integer programming, is applied to a dataset of 9,000 daily commuting trips from a mid-size city. Experimental results show that community-based trip sharing reduces daily car usage by up to 44%, thus producing significant environmental and traffic benefits and reducing parking pressure. The results also indicate that daily flexibility in pairing cars and passengers has a significant impact on the benefits of the approach, revealing new insights into commuting patterns and trip sharing.
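
A toy illustration of the shareability-graph ingredient mentioned above (the paper combines clustering, shareability graphs, and a mixed-integer program; the compatibility rule, trip data, and use of networkx below are assumptions made for the sketch): build a graph whose nodes are commuters and whose edges connect pairs whose trips could be served by one car, then a maximum matching yields a set of pairwise shared trips.

```python
# Toy sketch of pairing commuters via a shareability graph and a matching.
# The compatibility rule is a made-up placeholder; the paper uses clustering,
# routing constraints, and a mixed-integer program instead.
import networkx as nx

trips = {
    "alice": {"origin": (0, 0), "destination": (5, 5), "departure": 8.0},
    "bob":   {"origin": (0, 1), "destination": (5, 4), "departure": 8.1},
    "carol": {"origin": (9, 9), "destination": (1, 1), "departure": 8.0},
}

def shareable(t1, t2, max_detour=2.0, max_wait=0.25):
    # Placeholder test: nearby origins/destinations and close departure times.
    d = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    return (d(t1["origin"], t2["origin"]) <= max_detour
            and d(t1["destination"], t2["destination"]) <= max_detour
            and abs(t1["departure"] - t2["departure"]) <= max_wait)

G = nx.Graph()
G.add_nodes_from(trips)
for a in trips:
    for b in trips:
        if a < b and shareable(trips[a], trips[b]):
            G.add_edge(a, b)

# Each matched pair shares one car; unmatched commuters drive alone.
pairs = nx.max_weight_matching(G, maxcardinality=True)
print(pairs)  # e.g. {('bob', 'alice')}
```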

#8 Towards Generalization in QBF Solving via Machine Learning

Author: Mikoláš Janota

There are well-known cases of Quantified Boolean Formulas (QBFs) that have short winning strategies (Skolem/Herbrand functions) but that are hard for today's solvers. This paper argues that a solver benefits from generalizing a set of individual wins into a strategy. This idea is realized on top of the competitive RAReQS algorithm by utilizing machine learning, which enables learning shorter strategies. The implemented prototype QFUN won first place in the non-CNF track of the most recent QBF competition.

#9 Schur Number Five

Author: Marijn Heule

We present the solution of a century-old problem known as Schur Number Five: What is the largest (natural) number n such that there exists a five-coloring of the positive numbers up to n without a monochromatic solution of the equation a + b = c? We obtained the solution, n = 160, by encoding the problem into propositional logic and applying massively parallel satisfiability solving techniques on the resulting formula. We also constructed and validated a proof of the solution to increase trust in the correctness of the multi-CPU-year computations. The proof is two petabytes in size and was certified using a formally verified proof checker, demonstrating that any result by satisfiability solvers---no matter how large---can now be validated using highly trustworthy systems.
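
The propositional encoding is easy to reproduce in outline (a sketch using the usual direct encoding, with variable x_{i,c} meaning "number i gets color c"; the function name is illustrative): every number receives at least one of the five colors, and for every triple with a + b = c no color may be shared by a, b, and c.

```python
# Sketch: DIMACS-style encoding of "the numbers 1..n can be 5-colored with
# no monochromatic solution of a + b = c" (satisfiable for n = 160 and
# unsatisfiable for n = 161, per the paper's result).

def schur_cnf(n, colors=5):
    var = lambda i, c: (i - 1) * colors + c        # x_{i,c}, 1-based DIMACS ids
    clauses = []
    # Every number gets at least one color.
    for i in range(1, n + 1):
        clauses.append([var(i, c) for c in range(1, colors + 1)])
    # No monochromatic a + b = c (a <= b to avoid duplicate clauses).
    for a in range(1, n + 1):
        for b in range(a, n + 1 - a):
            s = a + b
            for c in range(1, colors + 1):
                clauses.append([-var(a, c), -var(b, c), -var(s, c)])
    return n * colors, clauses

num_vars, clauses = schur_cnf(10)
print(num_vars, len(clauses))
```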

#10 Parallel Algorithms for Operations on Multi-Valued Decision Diagrams

Authors: Guillaume Perez ; Jean-Charles Régin

Multi-valued Decision Diagrams (MDDs) have been extensively studied in the last ten years. Recently, efficient algorithms implementing operators such as reduction, union, intersection, and difference have been designed. They directly deal with the graph structure of the MDD, and a time reduction of several orders of magnitude in comparison to other existing algorithms has been observed. These operators have permitted a new look at MDDs, because extremely large MDDs can finally be manipulated, as shown by the models used to solve complex applications in music generation. However, MDDs become so large (50 GB) that minutes are sometimes required to perform some operations. In order to accelerate the manipulation of MDDs, parallel algorithms are required. In this paper, we introduce such algorithms. We carefully design them in order to overcome inherent difficulties of the parallelization of sequential algorithms, such as data dependencies, software lock-out, false sharing, or load balancing. As a result, we observe a speed-up, i.e. the ratio between sequential and parallel runtimes, growing linearly with the number of cores.
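
To make the operators concrete, here is the standard layer-by-layer product construction for MDD intersection in sequential form (a sketch only; the paper's contribution is parallelizing operations of this kind, and a real implementation would also prune dead nodes and reduce the result).

```python
# Sketch: sequential intersection of two layered MDDs.
# An MDD is given as a list of layers; each layer maps node -> {label: child}.
# The single node of layer 0 is the root.  A parallel version would spread
# the work done for the nodes of each layer across cores.

def mdd_intersection(mdd1, mdd2):
    root = (next(iter(mdd1[0])), next(iter(mdd2[0])))
    layers = [{root: {}}]
    for depth in range(len(mdd1) - 1):
        next_layer = {}
        for (u, v), edges in layers[depth].items():
            # Keep only labels present in both MDDs at this pair of nodes.
            common = mdd1[depth][u].keys() & mdd2[depth][v].keys()
            for label in common:
                child = (mdd1[depth][u][label], mdd2[depth][v][label])
                edges[label] = child
                next_layer.setdefault(child, {})
        layers.append(next_layer)
    return layers

# Two tiny MDDs over two variables with values {0, 1}.
A = [{"r": {0: "a", 1: "b"}}, {"a": {0: "t"}, "b": {1: "t"}}, {"t": {}}]
B = [{"r": {0: "a", 1: "b"}}, {"a": {0: "t", 1: "t"}, "b": {0: "t"}}, {"t": {}}]
print(mdd_intersection(A, B))
```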

#11 Learning Robust Search Strategies Using a Bandit-Based Approach

Authors: Wei Xia ; Roland Yap

Effective solving of constraint problems often requires choosing good or specific search heuristics. However, choosing or designing a good search heuristic is non-trivial and is often a manual process. In this paper, rather than manually choosing/designing search heuristics, we propose the use of bandit-based learning techniques to automatically select search heuristics. Our approach is online: the solver learns and selects from a set of heuristics during search. The goal is to obtain automatic search heuristics which give robust performance. Preliminary experiments show that our adaptive technique is more robust than the original search heuristics, and it can also outperform them.
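
The abstract does not fix a particular bandit algorithm or reward signal; the sketch below uses UCB1 with a generic reward as one plausible instantiation (the heuristic names and the reward are placeholders, not the paper's exact design).

```python
# Sketch: UCB1 selection over a fixed set of search heuristics.
import math

class UCB1HeuristicSelector:
    def __init__(self, heuristics):
        self.heuristics = list(heuristics)
        self.counts = [0] * len(self.heuristics)   # times each arm was played
        self.sums = [0.0] * len(self.heuristics)   # cumulative reward per arm
        self.t = 0

    def select(self):
        self.t += 1
        for i, c in enumerate(self.counts):        # play each arm once first
            if c == 0:
                return i
        ucb = [self.sums[i] / self.counts[i]
               + math.sqrt(2 * math.log(self.t) / self.counts[i])
               for i in range(len(self.heuristics))]
        return max(range(len(self.heuristics)), key=ucb.__getitem__)

    def update(self, i, reward):
        self.counts[i] += 1
        self.sums[i] += reward

# Inside the solver, at each restart (or block of nodes) one would do:
#   i = selector.select()
#   run search with selector.heuristics[i] for a while
#   selector.update(i, observed_reward)   # e.g. some normalized progress measure
selector = UCB1HeuristicSelector(["dom/wdeg", "impact", "activity"])
```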

#12 Enhancing Constraint-Based Multi-Objective Combinatorial Optimization

Authors: Miguel Terra-Neves ; Inês Lynce ; Vasco Manquinho

Minimal Correction Subsets (MCSs) have been successfully applied to find approximate solutions to several real-world single-objective optimization problems. However, only recently have MCSs been used to solve Multi-Objective Combinatorial Optimization (MOCO) problems. In particular, it has been shown that all optimal solutions of MOCO problems with linear objective functions can be found by an MCS enumeration procedure. In this paper, we show that the approach of MCS enumeration can also be applied to MOCO problems whose objective functions are divisions of linear expressions. Hence, it is not necessary to use a linear approximation of these objective functions. Additionally, we propose the integration of diversification techniques into the MCS enumeration process in order to find better approximations of the Pareto front of MOCO problems. Finally, experimental results on the Virtual Machine Consolidation (VMC) problem show the effectiveness of the proposed techniques.

#13 Premise Set Caching for Enumerating Minimal Correction Subsets

Authors: Alessandro Previti ; Carlos Mencía ; Matti Järvisalo ; Joao Marques-Silva

Methods for explaining the sources of inconsistency of overconstrained systems find an ever-increasing number of applications, ranging from diagnosis and configuration to ontology debugging and axiom pinpointing in description logics. Efficient enumeration of minimal correction subsets (MCSes), defined as sets of constraints whose removal from the system restores feasibility, is a central task in such domains. In this work, we propose a novel approach to speeding up MCS enumeration over conjunctive normal form propositional formulas by caching so-called premise sets (PSes) seen during the enumeration process. In contrast to earlier work, we move from caching unsatisfiable cores to caching PSes and propose a more effective way of implementing the cache. The proposed techniques noticeably improve the performance of state-of-the-art MCS enumeration algorithms in practice.
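
As background for the terminology (a deliberately brute-force sketch, not one of the algorithms improved in the paper): an MCS of an unsatisfiable clause set is a minimal set of clauses whose removal makes the remainder satisfiable. The toy enumerator below checks satisfiability by truth table and minimality by definition, which is only feasible for tiny formulas; real tools use SAT solvers plus techniques such as the caching discussed above.

```python
# Brute-force MCS enumeration over a tiny CNF (clauses are lists of ints).
# Exponential in both variables and clauses; for illustration only.
from itertools import combinations, product

def satisfiable(clauses, num_vars):
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False

def enumerate_mcses(clauses, num_vars):
    mcses = []
    idx = range(len(clauses))
    for k in range(len(clauses) + 1):              # smallest removals first
        for removed in combinations(idx, k):
            rest = [clauses[i] for i in idx if i not in removed]
            if satisfiable(rest, num_vars):
                r = set(removed)
                if not any(m < r for m in mcses):  # keep only minimal sets
                    mcses.append(r)
    return mcses

# Clauses: x, -x, (x or y). Removing clause 0 or clause 1 restores feasibility.
print(enumerate_mcses([[1], [-1], [1, 2]], num_vars=2))  # [{0}, {1}]
```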